
Add support for FLUX.2 LOKR models #88

Merged
lstein merged 1 commit into feature/lokr-flux2 from copilot/support-flux2-lokr-models
Feb 24, 2026
Conversation

Copilot AI commented Feb 24, 2026

Summary

Adds support for LOKR (Kronecker product LoRA) models designed for FLUX.2 Klein 4B/9B. InvokeAI now correctly recognizes the base model and variant for these models and can generate images using them.

Model detection changes (invokeai/backend/model_manager/configs/lora.py):

  • Added _lokr_in_dim() and _lokr_out_dim() helpers to compute LOKR layer dimensions from Kronecker product tensors
  • Updated _is_flux2_lora_state_dict() to detect FLUX.2 Klein shapes from LOKR keys in BFL and Kohya formats
  • Updated _get_flux2_lora_variant() to detect Klein 4B vs 9B variant from LOKR key shapes
  • Updated _validate_looks_like_lora() heuristic to include LOKR and LoHA key suffixes
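As a rough illustration of the dimension arithmetic behind those detection helpers (the function names mirror the PR's `_lokr_in_dim()`/`_lokr_out_dim()`, but the bodies here are a simplified sketch assuming plain two-factor LOKR tensors, using numpy in place of torch):

```python
import numpy as np

def lokr_out_dim(w1: np.ndarray, w2: np.ndarray) -> int:
    # The full LOKR weight is kron(w1, w2); its row count is the product
    # of the factors' row counts, i.e. the layer's output dimension.
    return w1.shape[0] * w2.shape[0]

def lokr_in_dim(w1: np.ndarray, w2: np.ndarray) -> int:
    # The column count of kron(w1, w2) gives the layer's input dimension.
    return w1.shape[1] * w2.shape[1]

# Sanity check against numpy's own Kronecker product (shapes are illustrative).
w1 = np.zeros((4, 8))
w2 = np.zeros((768, 384))
assert np.kron(w1, w2).shape == (lokr_out_dim(w1, w2), lokr_in_dim(w1, w2))
```

Detection can therefore recover the effective layer shape (and hence the Klein 4B vs 9B variant) from the two small factor tensors without materializing the full product.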

Model loading changes (invokeai/backend/patches/lora_conversions/flux_bfl_peft_lora_conversion_utils.py):

  • Extended is_state_dict_likely_in_flux_bfl_peft_format() to accept LyCORIS algorithm suffixes (lokr_w1, lokr_w2, hada_w1_a, etc.) in addition to PEFT suffixes
  • Added _split_bfl_key() helper to correctly parse keys with single-component suffixes (e.g. lokr_w1) vs two-component suffixes (e.g. lora_A.weight)
  • Updated lora_model_from_flux_bfl_peft_state_dict() and lora_model_from_flux2_bfl_peft_state_dict() to use _split_bfl_key()
  • Added _split_qkv_lokr() for FLUX.2 Klein LOKR QKV layer splitting — computes the full Kronecker product weight and splits it into separate Q/K/V full-weight (diff) layers; for factorized LOKR the alpha/rank scale is baked into the weight since FullLayer always uses scale=1.0
  • Updated _convert_bfl_layer_to_diffusers() to dispatch QKV splits for both LoRA and LOKR layer types
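A minimal sketch of the single- vs two-component suffix parsing (the function name mirrors the PR's `_split_bfl_key()`, but the suffix set and body here are illustrative assumptions, not the actual implementation):

```python
# Assumed set of LyCORIS suffixes that occupy a single dotted component.
SINGLE_COMPONENT_SUFFIXES = {"lokr_w1", "lokr_w2", "hada_w1_a", "hada_w1_b", "alpha"}

def split_bfl_key(key: str) -> tuple[str, str]:
    # LyCORIS suffixes (e.g. "lokr_w1") are one dotted component, while
    # PEFT suffixes (e.g. "lora_A.weight") span two; split accordingly
    # so the module path is recovered correctly in both cases.
    parts = key.split(".")
    if parts[-1] in SINGLE_COMPONENT_SUFFIXES:
        return ".".join(parts[:-1]), parts[-1]
    return ".".join(parts[:-2]), ".".join(parts[-2:])
```

For example, `split_bfl_key("double_blocks.0.img_attn.qkv.lokr_w1")` yields the module path `double_blocks.0.img_attn.qkv` plus suffix `lokr_w1`, while a PEFT key like `...qkv.lora_A.weight` keeps `lora_A.weight` together as the suffix.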

Bug fixes:

  • (invokeai/backend/patches/lora_conversions/flux_aitoolkit_lora_conversion_utils.py) Fixed a crash on generation where BFL-format LOKR models were misidentified as AIToolkit-format LoRAs. The AIToolkit detector matched any model with a diffusion_model.double_blocks. prefix (when no metadata is present), and ran before the BFL PEFT detector. Added an exclusion: if any key ends with LyCORIS-specific suffixes (lokr_w1, lokr_w2, hada_w1_a, etc.), the model is not AIToolkit format.
  • (invokeai/backend/patches/lora_conversions/flux_bfl_peft_lora_conversion_utils.py) Fixed spurious Unexpected keys: {'alpha'} warnings during generation. The _split_qkv_lokr() function was incorrectly passing alpha to FullLayer.from_state_dict_values(), which only handles {"diff", "diff_b"}.
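The AIToolkit exclusion described in the first fix can be sketched roughly as follows (`looks_like_aitoolkit` and `LYCORIS_SUFFIXES` are illustrative names; the real detector also consults model metadata):

```python
# Assumed LyCORIS-specific suffixes used for the early exclusion.
LYCORIS_SUFFIXES = ("lokr_w1", "lokr_w2", "hada_w1_a", "hada_w1_b")

def looks_like_aitoolkit(keys: list[str]) -> bool:
    # Bail out early if any key carries a LyCORIS-specific suffix, so that
    # BFL-format LOKR models fall through to the BFL PEFT detector instead
    # of being claimed by the (earlier-running) AIToolkit detector.
    if any(k.rsplit(".", 1)[-1] in LYCORIS_SUFFIXES for k in keys):
        return False
    return any(k.startswith("diffusion_model.double_blocks.") for k in keys)
```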

Related Issues / Discussions

QA Instructions

  1. Download a FLUX.2 Klein LOKR model (e.g. https://civitai.com/models/1972981/sex-nudes-other-fun-stuff-snofs)
  2. Install it in InvokeAI — it should be recognized as flux2 base with the correct Klein variant (4B or 9B)
  3. Generate an image using that LOKR — generation should complete without errors and without Unexpected keys warnings in the log

Merge Plan

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • ❗Changes to a redux slice have a corresponding migration
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)
Original prompt

This section details the original issue to resolve.

<issue_title>[enhancement]: Support FLUX.2 LOKR models</issue_title>
<issue_description>### Is there an existing issue for this?

  • I have searched the existing issues

Contact Details

No response

What should this feature add?

I would like to be able to download and install LOKR models designed for FLUX.2 Klein 4b/9b, such as the one located here: https://civitai.com/models/1972981/sex-nudes-other-fun-stuff-snofs . InvokeAI should correctly recognize the base model and variant, and I should be able to generate images using this LOKR.

Alternatives

No response

Additional Content

No response</issue_description>

Comments on the Issue (you are @copilot in this section)



@lstein (Owner) left a comment

When loading the LOKR file at https://civitai.com/models/1972981/sex-nudes-other-fun-stuff-snofs, the file is identified correctly as Flux.2 Klein, but I am getting this stack trace on generation:

[2026-02-24 09:20:47,601]::[InvokeAI]::ERROR --> Error while invoking session 80cad9d2-b9c2-41aa-a831-7b94e70b2c92, invocation 246fa67e-049a-45b2-bd0e-b18bf432a56d (flux2_klein_text_encoder): Unsupported lora format: dict_keys(['proj.alpha', 'proj.lokr_w1', 'proj.lokr_w2', 'qkv.alpha', 'qkv.lokr_w1', 'qkv.lokr_w2'])
[2026-02-24 09:20:47,601]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 76, in invoke
    qwen3_embeds, pooled_embeds = self._encode_prompt(context)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 116, in _encode_prompt
    exit_stack.enter_context(
  File "/home/lstein/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 526, in enter_context
    result = _enter(cm)
             ^^^^^^^^^^
  File "/home/lstein/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/layer_patcher.py", line 39, in apply_smart_model_patches
    for patch, patch_weight in patches:
                               ^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/flux2_klein_text_encoder.py", line 216, in _lora_iterator
    lora_info = context.models.load(lora.lora)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/shared/invocation_context.py", line 392, in load
    return self._services.model_manager.load.load_model(model, submodel_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/model_load/model_load_default.py", line 71, in load_model
    ).load_model(model_config, submodel_type)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/load_default.py", line 59, in load_model
    cache_record = self._load_and_cache(model_config, submodel_type)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/load_default.py", line 104, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/model_manager/load/model_loaders/lora.py", line 136, in _load_model
    model = lora_model_from_flux_aitoolkit_state_dict(state_dict=state_dict)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/lora_conversions/flux_aitoolkit_lora_conversion_utils.py", line 79, in lora_model_from_flux_aitoolkit_state_dict
    layers[FLUX_LORA_TRANSFORMER_PREFIX + layer_key] = any_lora_layer_from_state_dict(layer_state_dict)
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/backend/patches/layers/utils.py", line 35, in any_lora_layer_from_state_dict
    raise ValueError(f"Unsupported lora format: {state_dict.keys()}")
ValueError: Unsupported lora format: dict_keys(['proj.alpha', 'proj.lokr_w1', 'proj.lokr_w2', 'qkv.alpha', 'qkv.lokr_w1', 'qkv.lokr_w2'])


Copilot AI commented Feb 24, 2026

> When loading the LOKR file at https://civitai.com/models/1972981/sex-nudes-other-fun-stuff-snofs, the file is identified correctly as Flux.2 Klein, but I am getting this stack trace on generation: […]

Fixed in 43f65b5. The model was being misidentified as AIToolkit format because is_state_dict_likely_in_flux_aitoolkit_format returned True for any model with diffusion_model.double_blocks. prefix (when no metadata is present), which includes BFL-format LOKR models. The AIToolkit format dispatcher runs before the BFL PEFT dispatcher, so it would grab the LOKR model and fail when trying to build standard LoRA layers from it.

The fix adds an early exclusion in is_state_dict_likely_in_flux_aitoolkit_format: if any key ends with LyCORIS-specific suffixes (lokr_w1, lokr_w2, hada_w1_a, etc.), it returns False, letting the BFL PEFT converter handle it correctly.

Copilot AI changed the title [WIP] Add support for FLUX.2 LOKR models Add support for FLUX.2 LOKR models Feb 24, 2026
Copilot AI requested a review from lstein February 24, 2026 14:35
@lstein (Owner) left a comment

The test LOKR is now recognized correctly and renders. However, I am seeing a lot of these warnings:
[2026-02-24 09:40:54,404]::[InvokeAI]::WARNING --> Unexpected keys found in LoRA/LyCORIS layer, model might work incorrectly! Unexpected keys: {'alpha'}


Copilot AI commented Feb 24, 2026

> The test LOKR is now recognized correctly and renders. However, I am seeing a lot of these warnings: `[2026-02-24 09:40:54,404]::[InvokeAI]::WARNING --> Unexpected keys found in LoRA/LyCORIS layer, mo...`

Fixed in e7681ee. The warning came from _split_qkv_lokr passing alpha to FullLayer.from_state_dict_values, which only handles {"diff", "diff_b"}. The fix removes alpha from the split diff layer state dicts. For factorized LOKR (with w1_a/w1_b), the alpha/rank scale is now baked directly into the diff weight, since FullLayer.scale() always returns 1.0.
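A rough sketch of that fix's weight math (the function name is hypothetical, numpy stands in for torch, and `rank` is passed explicitly here, whereas the real code derives the scale from the factorized `w1_a`/`w1_b` shapes):

```python
import numpy as np

def qkv_lokr_to_diffs(w1: np.ndarray, w2: np.ndarray, alpha: float, rank: int):
    # Reconstruct the full LOKR update via the Kronecker product, fold the
    # alpha/rank scale into the weight up front (diff-style full layers
    # apply scale = 1.0 at patch time), then split the rows into three
    # equal Q/K/V chunks. No 'alpha' key remains in the split state dicts.
    full = np.kron(w1, w2) * (alpha / rank)
    q, k, v = np.split(full, 3, axis=0)
    return q, k, v
```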

Copilot AI requested a review from lstein February 24, 2026 14:49
@lstein lstein marked this pull request as ready for review February 24, 2026 15:22
@lstein (Owner) left a comment

Works perfectly now!

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

Fix BFL LOKR models being misidentified as AIToolkit format

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

Fix alpha key warning in LOKR QKV split layers

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
@lstein lstein force-pushed the copilot/support-flux2-lokr-models branch from 48295b8 to b646b2c Compare February 24, 2026 22:43
@lstein lstein merged commit c5d707c into feature/lokr-flux2 Feb 24, 2026
13 checks passed
@lstein lstein deleted the copilot/support-flux2-lokr-models branch February 24, 2026 22:44